Camera Vision and Inertial Measurement Unit Sensor Fusion for Lane Detection and Tracking using Polynomial Bounding Curves

Authors

  • Christopher Rose
  • David M. Bevly
Abstract

This paper presents a technique for combining vision and inertial measurement unit (IMU) data to increase the reliability of lane departure warning systems. In this technique, 2nd-order polynomials are used to model both the lane itself and the likelihood region for the location of the lane marking in the image. The IMU is used to predict the drift of these polynomials and of the estimated lane marking when the lane markings cannot be detected in the image. Subsequent frames in which the lane marking is present result in faster convergence of the model on the lane marking, since fewer erroneous lines are detected. To reduce the effect of untracked lane markings, the previously detected 2nd-order polynomial is bounded by two other polynomials, between which lies the likelihood region for the next frame's lane marking. These bounds share the characteristics of the original line; therefore, given smooth transitions between frames, the lane marking should be detected within the bounded area. The inertial measurement unit provides the accelerations and rotation rates of the vehicle. Using an extended Kalman filter, the IMU information is blended with the last known coefficients of the estimated lane marking to approximate the lane marking coefficients until the lane is detected again. A measurement of the position within the lane is obtained from the number of pixels between the center of the image and the estimated lane marking; this value is then converted to its real-world equivalent and used to estimate the position of the vehicle within the lane.
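To make the bounding-curve and lateral-offset steps above concrete, the sketch below is a minimal illustration in Python, not the authors' implementation; the offset width, reference image row, and meters-per-pixel scale are assumed example values, and the lane polynomial is taken as x = a*y^2 + b*y + c in image coordinates.

    import numpy as np

    def bound_polynomials(coeffs, offset_px=30.0):
        # Bound the previously fitted 2nd-order lane polynomial x = a*y^2 + b*y + c
        # with two laterally offset curves that enclose the likelihood region for
        # the next frame's lane marking (offset_px is an assumed tuning value).
        a, b, c = coeffs
        return (a, b, c - offset_px), (a, b, c + offset_px)

    def filter_candidates(points, lower, upper):
        # Keep only candidate line points (x, y) that lie between the two bounds,
        # discarding erroneously detected lines outside the likelihood region.
        pts = np.asarray(points, dtype=float)
        x, y = pts[:, 0], pts[:, 1]
        inside = (x >= np.polyval(lower, y)) & (x <= np.polyval(upper, y))
        return pts[inside]

    def lateral_position_m(coeffs, image_width, y_ref=400, m_per_px=0.005):
        # Pixel offset from the image center to the estimated lane marking at a
        # reference row, scaled to meters with an assumed calibration constant.
        return (np.polyval(coeffs, y_ref) - image_width / 2.0) * m_per_px

In a full pipeline, points passing the bound test would be re-fit to update the polynomial, while the extended Kalman filter propagates the last accepted coefficients from the IMU data whenever no points survive the test.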

Similar references

Fast and Stable Tracking for AR fusing Video and Inertial Sensor Data

Accurate acquisition of camera position and orientation is crucial for realistic augmentations of camera images. Computer vision based tracking algorithms, using the camera itself as sensor, are known to be very accurate but also time-consuming. The integration of inertial sensor data provides a camera pose update at 100 Hz and therefore stability and robustness against rapid motion and occlusi...

Metrology and Measurement Systems

The contribution presents a novel approach to the detection and tracking of lanes based on lidar data. Therefore, we use the distance and reflectivity data coming from a one-dimensional sensor. After having detected the lane through a temporal fusion algorithm, we register the lidar data in a world-fixed coordinate system. To this end, we also incorporate the data coming from an inertial measur...

Vehicle Lane Position Estimation with Camera Vision using Bounded Polynomial Interpolated Lines

Applications of camera vision, such as lane departure warning systems, are limited by the quality of the frame image and the information contained within each frame. One common feature extraction technique in image processing is the Hough transform, which can be used to extract lines from an image. The detected lane marking lines are used in the interpolation of a 2nd-order polynomi...
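As a rough sketch of the Hough-transform line extraction and 2nd-order polynomial interpolation this abstract describes, the function below uses OpenCV's probabilistic Hough transform and a least-squares fit; the Canny and Hough parameters are illustrative assumptions, not values from the paper.

    import cv2
    import numpy as np

    def fit_lane_polynomial(frame_bgr):
        # Detect candidate lane-marking line segments with the Hough transform,
        # then interpolate their endpoints with a 2nd-order polynomial
        # x = a*y^2 + b*y + c in image coordinates.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                                minLineLength=20, maxLineGap=5)
        if lines is None:
            return None  # no lane marking detected in this frame
        segments = lines.reshape(-1, 4)
        xs = np.concatenate([segments[:, 0], segments[:, 2]])
        ys = np.concatenate([segments[:, 1], segments[:, 3]])
        return np.polyfit(ys, xs, 2)  # coefficients (a, b, c), highest order first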

A Real-time Motion Tracking Wireless System for Upper Limb Exosuit Based on Inertial Measurement Units and Flex Sensors (TECHNICAL NOTE)

This paper puts forward a real-time angular tracking (motion capture) system for a low-cost upper limb exosuit based on sensor fusion; the system integrates an elastic sleeve-mitten, two inertial measurement units (IMUs), two flex sensors and a wireless communication system. The device can accurately detect the angular position of the shoulder (flexion-extension, abduction-adduction and interna...

LIDAR, Camera, and Inertial Sensor Based Navigation and Positioning Techniques for Advanced ITS Applications

Sensor fusion techniques have been used for years to combine sensory data from disparate sources. This dissertation focuses on LIDAR, camera and inertial sensors based navigation and vehicle positioning techniques. First of all, a unique multi-planar LIDAR and computer vision calibration algorithm is proposed. This approach requires the camera and LIDAR to observe a planar pattern. Then the geo...

Journal title:

Volume   Issue

Pages  -

Publication date: 2011